facial-recognition technology


Does A.I. Lead Police to Ignore Contradictory Evidence?

The New Yorker

After the bus driver ordered him to observe a rule requiring passengers to wear face masks, he approached the fare box and began arguing with her. "I hit bitches," he said, leaning over a plastic shield that the driver was sitting behind. When she pulled out her iPhone to call the police, he reached around the shield, snatched the device, and raced off. The bus driver followed the man outside, where he punched her in the face repeatedly. He then stood by the curb, laughing, as his victim wiped blood from her nose. By the time police officers canvassed the area, the assailant had fled, but the incident had been captured on surveillance cameras.


Opinion: How to counter China's scary use of artificial intelligence data

Boston Herald

Nowhere is the competition in developing artificial intelligence fiercer than in the accelerating rivalry between the United States and China. At stake in this competition is not just who leads in AI but who sets the rules for how it is used around the world. China is forging a new model of digital authoritarianism at home and is actively exporting it abroad. It has launched a national-level AI development plan with the intent to be the global leader by 2030. And it is spending billions on AI deployment, training more AI scientists and aggressively courting experts from Silicon Valley.


Cities Take the Lead in Setting Rules Around How AI Is Used

#artificialintelligence

Cities are looking at a number of solutions to these problems. Some require disclosure when an AI model is used in decisions, while others mandate audits of algorithms, track where AI causes harm or seek public input before putting new AI systems in place. What would you like to see cities do to make their use of AI more transparent and fair? It will take time for cities and local bureaucracies to build expertise in these areas and figure out how to craft the best regulations, says Joanna Bryson, a professor of ethics and technology at the Hertie School in Berlin. But such efforts could provide a model for other cities, and even nations that are trying to craft standards of their own, she says.


The IRS Should Stop Using Facial Recognition

The Atlantic - Technology

With tax season upon us, the IRS is pushing individuals to submit to facial recognition in exchange for being able to complete a range of basic tax-related activities online. To do so, the IRS has retained a private firm, ID.me. The IRS is not the only government agency working with ID.me: the company claims to serve "27 states, multiple federal agencies, and over 500 name brand retailers." This is alarming for several reasons.


These high school students are fighting for ethical AI

#artificialintelligence

It's been a busy year for Encode Justice, an international group of grassroots activists pushing for ethical uses of artificial intelligence. There have been legislators to lobby, online seminars to hold, and meetings to attend, all in hopes of educating others about the harms of facial-recognition technology. It would be a lot for any activist group to fit into the workday; most of the team behind Encode Justice has had to cram it all in around high school. That's because the group was created and is run almost entirely by high schoolers. Its founder and president, Sneha Revanur, is a 16-year-old high-school senior in San Jose, California, and at least one member of the leadership team isn't old enough to get a driver's license.


This $5 billion insurance company likes to talk up its AI. Now it's in a mess over it

#artificialintelligence

A key part of insurance company Lemonade's pitch to investors and customers is its ability to disrupt the normally staid insurance industry with artificial intelligence. It touts friendly chatbots like AI Maya and AI Jim, which help customers sign up for policies for things like homeowners' or pet health insurance, and file claims through Lemonade's app. And it has raised hundreds of millions of dollars from public and private market investors, in large part by positioning itself as an AI-powered tool. Yet less than a year after its public market debut, the company, now valued at $5 billion, finds itself in the middle of a PR controversy related to the technology that underpins its services. On Twitter and in a blog post on Wednesday, Lemonade explained why it deleted what it called an "awful thread" of tweets it had posted on Monday. Those now-deleted tweets had said, among other things, that the company's AI analyzes the videos that users submit when they file insurance claims for signs of fraud, picking up "non-verbal cues that traditional insurers can't, since they don't use a digital claims process."


Fighting algorithmic bias in artificial intelligence – Physics World

#artificialintelligence

Physicists are increasingly developing artificial intelligence and machine learning techniques to advance our understanding of the physical world, but there is rising concern about bias in such systems and their wider impact on society at large. In 2011, during her undergraduate degree at Georgia Institute of Technology, Ghanaian-US computer scientist Joy Buolamwini discovered that getting a robot to play a simple game of peek-a-boo with her was impossible – the machine was incapable of seeing her dark-skinned face. Later, in 2015, as a Master's student at Massachusetts Institute of Technology's Media Lab working on a science–art project called Aspire Mirror, she had a similar issue with facial-analysis software: it detected her face only when she wore a white mask. Buolamwini's curiosity led her to run one of her profile images across four facial-recognition demos, which, she discovered, either couldn't identify a face at all or misgendered her – a bias that she refers to as the "coded gaze". She then decided to test 1,270 faces of politicians from three African and three European countries, with different features, skin tones and genders, which became her Master's thesis project "Gender Shades: Intersectional accuracy disparities in commercial gender classification" (figure 1).


Clearview AI uses your online photos to instantly ID you. That's a problem, lawsuit says

#artificialintelligence

Clearview AI has amassed a database of more than 3 billion photos of individuals by scraping sites such as Facebook, Twitter, Google and Venmo. It's bigger than any other known facial-recognition database in the U.S., including the FBI's. The New York company uses algorithms to map the pictures it stockpiles, determining, for example, the distance between an individual's eyes to construct a "faceprint." This technology appeals to law enforcement agencies across the country, which can use it in real time to help determine people's identities. It also has caught the attention of civil liberties advocates and activists, who allege in a lawsuit filed Tuesday that the company's automatic scraping of their images and its extraction of their unique biometric information violate privacy and chill protected political speech and activity.
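The "faceprint" described above – geometric measurements such as the distance between the eyes, assembled into a vector that can be compared against a database – can be sketched in a few lines. This is a toy illustration of the general idea, not Clearview's actual pipeline (modern systems typically use learned deep embeddings rather than hand-measured distances); the landmark coordinates and the match threshold below are invented for the example.

```python
import numpy as np

def faceprint(landmarks: np.ndarray) -> np.ndarray:
    """Build a toy faceprint: all pairwise distances between facial
    landmarks, normalized by the inter-eye distance (landmarks 0 and 1)
    so the print is invariant to how far the camera was from the face."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)          # full distance matrix
    upper = dist[np.triu_indices(len(landmarks), k=1)]
    return upper / dist[0, 1]                      # scale-invariant vector

def match(print_a: np.ndarray, print_b: np.ndarray,
          threshold: float = 0.1) -> bool:
    """Declare a match if the two faceprints are closer than the
    (arbitrary, illustrative) threshold in Euclidean distance."""
    return bool(np.linalg.norm(print_a - print_b) < threshold)

# Invented landmarks: left eye, right eye, nose tip, mouth center.
face_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.2], [0.5, 1.8]])
face_b = face_a * 2.0   # same face photographed at twice the scale

print(match(faceprint(face_a), faceprint(face_b)))  # True: scale cancels
```

Normalizing by the inter-eye distance is what lets the same face match across photos taken at different distances; a real system would also need to handle pose, lighting and expression, which is why production pipelines use learned embeddings instead of raw geometry.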


Facial-recognition research needs an ethical reckoning

#artificialintelligence

Cameras using facial-recognition technology in King's Cross, London, were taken down in 2019 after concerns were raised that they had been installed without appropriate consent or involvement of the data regulator. Credit: James Veysey/Shutterstock

Over the past 18 months, a number of universities and companies have been removing online data sets containing thousands -- or even millions -- of photographs of faces used to improve facial-recognition algorithms. The pictures are classified as public data, and their collection didn't seem to alarm institutional review boards (IRBs) and other research-ethics bodies. But none of the people in the photos had been asked for permission, and some were unhappy about the way their faces had been used. This problem has been brought to prominence by the work of Berlin-based artist and researcher Adam Harvey, who highlighted how public data sets are used by companies to hone surveillance-linked technology -- and by the journalists who reported on Harvey's work. Many researchers in the fields of computer science and artificial intelligence (AI), and those responsible for the relevant institutional ethical review processes, did not see any harm in using public data without consent.


University of Miami Becomes Latest Battleground Over Facial Recognition

WSJ.com: WSJD - Technology

The University of Miami in recent days rebutted claims it uses facial-recognition technology after students accused campus police of using the tool to identify them at a protest related to the coronavirus pandemic. Two students claim UM's dean of students told a handful of campus protesters at a virtual meeting on Sept. 22 that they were identified at an unsanctioned demonstration using specialized software that analyzed camera footage of the event. "We walked away with the impression that they use facial recognition and they used a specific software to identify us," said Mars Fernandez, a doctoral student in counseling psychology and one of the students at the meeting. University officials subsequently denied that campus police use facial-recognition technology except when collaborating with other law enforcement agencies in certain criminal investigations. UM spokeswoman Jacqueline R. Menendez said in a statement Friday that the students from the Sept. 4 protest, many of whom wore masks, were identified using "basic investigative techniques."